Results 1 - 20 of 3,406
2.
J Med Syst; 47(1): 86, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37581690

ABSTRACT

ChatGPT, a language model developed by OpenAI, uses a 175-billion-parameter Transformer architecture for natural language processing tasks. This study aimed to compare the knowledge and interpretation ability of ChatGPT with those of medical students in China by administering the Chinese National Medical Licensing Examination (NMLE) to both. We evaluated ChatGPT's performance on three years' worth of the NMLE, which consists of four units, and compared its results with those of medical students who had completed five years of study at medical colleges. ChatGPT's performance was lower than that of the medical students, and its correct answer rate was related to the year in which the exam questions were released. ChatGPT's knowledge and interpretation ability for the NMLE were not yet comparable to those of medical students in China. It is probable, however, that these abilities will improve through deep learning.
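The abstract does not describe the querying setup; purely as an illustration, a minimal sketch of how single-best-answer items could be posed to ChatGPT through the OpenAI API and scored might look like the following (the model name, prompt wording, and scoring rule are assumptions, not the authors' protocol):

```python
# Hypothetical sketch only: score a chat model on single-best-answer exam items.
# Assumes the `openai` Python package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask_item(stem: str, options: dict) -> str:
    """Pose one item and return the option letter the model picks."""
    prompt = stem + "\n" + "\n".join(f"{letter}. {text}" for letter, text in options.items())
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; not specified by the study
        messages=[
            {"role": "system", "content": "Answer with the single best option letter only."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()[:1].upper()

def correct_rate(items: list) -> float:
    """Fraction answered correctly; each item is a dict with 'stem', 'options', 'key'."""
    hits = sum(ask_item(i["stem"], i["options"]) == i["key"] for i in items)
    return hits / len(items)
```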


Subject(s)
Artificial Intelligence, Educational Measurement, Licensure, Medicine, Medical Students, Humans, Asian People, China, Knowledge, Language, Medicine/standards, Licensure/standards, Medical Students/statistics & numerical data, Educational Measurement/standards
3.
Educ. med. super; 36(2), Jun. 2022. ilus, tab
Article in Spanish | LILACS, CUMED | ID: biblio-1404547

ABSTRACT

Introduction: The training of medical-surgical specialists (residents) takes place in hospitals where healthcare and teaching-learning activities converge. Knowledge about this dual setting is essential for identifying opportunities to optimize the quality and effectiveness of both activities. Objective: To construct a scale for measuring residents' perception of the teaching-learning environment in clinical practice during their training in Colombia. Methods: A Likert-type scale was designed by adapting the Association for Medical Education in Europe guide Developing Questionnaires for Educational Research, following these steps: literature review, review of Colombian regulations on university hospitals, synthesis of the evidence, item development, face validation by experts, and administration of the questionnaire to residents. Results: A clinical practice environment scale (EAPRAC) was constructed on the basis of activity theory and workplace-situated learning. Initially, 46 questions were defined; after face validation, 39 items remained, distributed across seven domains: academic processes, teaching staff, teaching-service agreements, well-being, academic infrastructure, care infrastructure, and organization and management. Administering the scale to residents revealed no comprehension problems, so it was not necessary to reduce the number or revise the content of the items. Conclusions: The constructed scale has face validity as judged by expert peers and residents, which allows its content validity and reproducibility to be assessed in a later phase.


Subject(s)
Humans, Teaching, Knowledge, Learning, Health Management, Medical Education, Educational Measurement/standards, Evaluation Studies as Topic, Hospitals/standards
4.
Med Teach; 44(4): 353-359, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35104191

ABSTRACT

Health professions education has undergone significant changes over the last few decades, including the rise of competency-based medical education, a shift to authentic workplace-based assessments, and increased emphasis on programmes of assessment. Despite these changes, there is still a commonly held assumption that objectivity always leads to, and is the only way to achieve, fairness in assessment. However, there are well-documented limitations to using objectivity as the 'gold standard' against which assessments are judged. Fairness, on the other hand, is a fundamental quality of assessment and a principle that almost no one contests. Taking a step back and changing perspectives to focus on fairness in assessment may help reset the traditional objective approach and identify an equal role for subjective human judgement in assessment alongside objective methods. This paper explores fairness as a fundamental quality of assessments. This approach legitimises human judgement and shared subjectivity in assessment decisions alongside objective methods. Widening the answer to the question 'What is fair assessment?' to include not only objectivity but also expert human judgement and shared subjectivity can add significant value in ensuring learners are better equipped to be the health professionals required of the 21st century.


Subject(s)
Competency-Based Education, Educational Measurement/methods, Educational Measurement/standards, Health Occupations/education, Workplace, Humans, Judgment
9.
Am J Surg; 222(6): 1112-1119, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34600735

ABSTRACT

BACKGROUND: The American Board of Surgery has mandated that chief residents complete 25 cases in the teaching assistant (TA) role. We developed a structured instrument, the Teaching Evaluation and Assessment of the Chief Resident (TEACh-R), to determine readiness and provide feedback for residents in this role. METHODS: Senior (PGY3-5) residents were scored on technical and teaching performance by faculty observers using the TEACh-R instrument in the simulation lab. Residents were provided with their TEACh-R scores and surveyed on their experience. RESULTS: Scores in the technical (p < 0.01) and teaching (p < 0.01) domains increased with PGY. Higher technical, but not teaching, scores correlated with attending-rated readiness for operative independence (p = 0.02). Autonomy mismatch was inversely correlated with teaching competence (p < 0.01). Residents reported satisfaction with TEACh-R feedback and a desire for use of this instrument in operating room settings. CONCLUSION: Our TEACh-R instrument is an effective way to assess technical and teaching performance in the TA role.


Subject(s)
Internship and Residency/organization & administration, Teaching/standards, Educational Measurement/standards, Humans, Internship and Residency/methods, Internship and Residency/standards, Reproducibility of Results, Surveys and Questionnaires
10.
Am J Surg; 222(6): 1163-1166, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34602278

ABSTRACT

BACKGROUND: This study aims to determine if there are correlations between clinical performance and objective grading parameters for medical students in the third-year surgery clerkship. METHODS: Clerkship grades were compiled from 2016 to 2020. Performance on clinical rotations, the NBME shelf exam, the oral exam, and weekly quizzes was reviewed. Students were divided into quartiles (Q1-Q4) based on clinical performance. Standard statistical analysis was performed. RESULTS: There were 625 students included in the study. Students in Q1+Q2 were more likely than those in Q3+Q4 to score in the top quartile on the shelf exam (29% vs. 19%, p = 0.002), oral exam (24% vs. 17%, p = 0.032), and quizzes (22% vs. 15%, p = 0.024). However, there was negligible correlation between clinical performance and performance on objective measures: shelf exam (R² = 0.027, p < 0.001), oral exam (R² = 0.021, p < 0.001), and weekly quizzes (R² = 0.053, p = 0.092). CONCLUSIONS: Clinical performance does not correlate with objective grading parameters for medical students in the third-year surgery clerkship.
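The quartile comparisons and coefficients of determination reported above can in principle be reproduced from per-student scores; the sketch below is a generic illustration with assumed column names, not the authors' analysis code:

```python
# Generic illustration: quartile split by clinical score and R^2 against the shelf exam.
# Assumes a pandas DataFrame with hypothetical columns 'clinical' and 'shelf'.
import pandas as pd
from scipy import stats

def summarize(df: pd.DataFrame):
    # Label quartiles so that Q1 = strongest clinical performers, Q4 = weakest.
    df = df.assign(quartile=pd.qcut(df["clinical"], 4, labels=["Q4", "Q3", "Q2", "Q1"]))
    top_shelf = df["shelf"] >= df["shelf"].quantile(0.75)      # top shelf-exam quartile
    in_upper_half = df["quartile"].isin(["Q1", "Q2"])
    rate_by_group = top_shelf.groupby(in_upper_half).mean()    # Q1+Q2 vs. Q3+Q4 rates
    fit = stats.linregress(df["clinical"], df["shelf"])        # linear association
    return rate_by_group, fit.rvalue ** 2, fit.pvalue
```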


Subject(s)
Clinical Clerkship/standards, Clinical Competence, Educational Measurement, General Surgery/education, Clinical Clerkship/statistics & numerical data, Clinical Competence/standards, Clinical Competence/statistics & numerical data, Educational Measurement/standards, Educational Measurement/statistics & numerical data, Humans
13.
Anesth Analg; 133(5): 1331-1341, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34517394

ABSTRACT

In 2020, the coronavirus disease 2019 (COVID-19) pandemic interrupted the administration of the APPLIED Examination, the final part of the American Board of Anesthesiology (ABA) staged examination system for initial certification. In response, the ABA developed, piloted, and implemented an Internet-based "virtual" form of the examination to allow administration of both components of the APPLIED Exam (Standardized Oral Examination and Objective Structured Clinical Examination) when it was impractical and unsafe for candidates and examiners to travel and have in-person interactions. This article describes the development of the ABA virtual APPLIED Examination, including its rationale, examination format, technology infrastructure, candidate communication, and examiner training. Although the logistics are formidable, we report a methodology for successfully introducing a large-scale, high-stakes, 2-element, remote examination that replicates previously validated assessments.


Subject(s)
Anesthesiology/education, COVID-19/epidemiology, Certification/methods, Computer-Assisted Instruction/methods, Educational Measurement/methods, Specialty Boards, Anesthesiology/standards, COVID-19/prevention & control, Certification/standards, Clinical Competence/standards, Computer-Assisted Instruction/standards, Educational Measurement/standards, Humans, Internship and Residency/methods, Internship and Residency/standards, Specialty Boards/standards, United States/epidemiology
16.
An. psicol; 37(2): 287-297, May-Sept. 2021. tab
Article in Spanish | IBECS | ID: ibc-202552

ABSTRACT

Today, educational engagement is considered one of the most important factors in predicting good student learning and educational success. However, most of the instruments described do not include all the key factors linked to academic engagement: motivations, values, learning contexts, emotional state, and management strategies. The aim of this study is to develop a scale to assess the level of educational engagement in Higher Education students (MMSEE) that overcomes this limitation. METHODS: Exploratory and confirmatory factor analyses, as well as a study of internal consistency and convergent and discriminant validity, were carried out on a sample of 764 students from the University of Seville (Spain), belonging to all areas of knowledge and different degree courses. RESULTS: A five-factor structure of educational engagement is explored and confirmed with a very good level of fit, explaining approximately 65.78% of the variance, with excellent internal consistency (α = .91) and significant evidence of convergent and discriminant validity. CONCLUSIONS: It is concluded that the MMSEE is a valid and reliable instrument for measuring the level of engagement in classrooms, as well as for improving understanding of the construct through its factors.
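For context on the reported internal consistency (α = .91), Cronbach's alpha can be computed directly from a respondents-by-items score matrix; the following is a generic sketch with made-up data, not the study's analysis:

```python
# Generic Cronbach's alpha for a Likert scale: rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(scores) -> float:
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example with made-up responses from 5 students to 4 Likert items.
demo = [[4, 5, 4, 5],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4]]
print(round(cronbach_alpha(demo), 2))
```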


Subject(s)
Humans, Male, Female, Young Adult, Learning, Motivation, Surveys and Questionnaires/standards, Educational Measurement/standards, Factor Analysis, Reproducibility of Results, Reference Values
17.
Acad Med; 96(11S): S136-S143, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34348376

ABSTRACT

PURPOSE: To identify the content of an educational handover letter from undergraduate to graduate education in General Surgery. METHOD: Expert consensus was attained on the content of an educational handover letter. A 3-stage Delphi technique was employed with 8 experts in each of 4 stakeholder groups: program directors in general surgery, medical student surgical acting internship or prep course directors, authors of medical student performance evaluations, and current categorical General Surgery residents. Data were collected from April through July 2019. A mixed-methods analysis was performed to quantitatively assess items selected for inclusion and qualitatively provide guidance for the implementation of such a letter. RESULTS: All 32 experts participated in at least one round. Of the 285 initially identified individual items, 22 were ultimately selected for inclusion in the letter. All but one expert agreed that the list represents what the content of an educational handover letter in General Surgery should be. Qualitative analysis was performed on 395 comments and identified 4 themes to guide the implementation of the letter: "minimize redundancy, optimize impact, use appropriate assessments, and mitigate risk." CONCLUSIONS: A framework and proposed template are provided for an educational handover letter from undergraduate to graduate medical education in General Surgery, based on quantitative and qualitative analysis of the expert consensus of major stakeholders. This letter holds promise to enhance the transition from undergraduate to graduate medical education by allowing programs to capitalize on strengths and efficiently address knowledge gaps in new trainees.


Subject(s)
Clinical Competence/standards, Correspondence as Topic, Graduate Medical Education, Educational Measurement/standards, General Surgery/education, Delphi Technique, Humans, Internship and Residency, United States
18.
PLoS One; 16(8): e0254340, 2021.
Article in English | MEDLINE | ID: mdl-34347794

ABSTRACT

The COVID-19 pandemic has impelled the majority of schools and universities around the world to switch to remote teaching. One of the greatest challenges in online education is preserving the academic integrity of student assessments. The lack of direct supervision by instructors during final examinations poses a significant risk of academic misconduct. In this paper, we propose a new approach to detecting potential cases of cheating on the final exam using machine learning techniques. We treat the identification of potential cases of cheating as an outlier detection problem. We use students' continuous assessment results to identify abnormal scores on the final exam. However, unlike a standard outlier detection task in machine learning, the student assessment data requires us to consider its sequential nature. We address this issue by applying recurrent neural networks together with anomaly detection algorithms. Numerical experiments on a range of datasets show that the proposed method achieves a remarkably high level of accuracy in detecting cases of cheating on the exam. We believe that the proposed method would be an effective tool for academics and administrators interested in preserving the academic integrity of course assessments.
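The paper's code is not reproduced here; as a loose sketch of the general idea (a recurrent network reads each student's sequence of continuous-assessment scores, and a simple outlier rule flags final-exam results that deviate sharply from what that history predicts), one might write something like the following in PyTorch. The architecture, threshold, and omitted training loop are placeholders, not the authors' method:

```python
# Loose sketch, not the authors' implementation: an LSTM predicts the final-exam
# score from a student's continuous-assessment sequence; unusually large positive
# residuals are flagged as potential outliers. Training loop omitted.
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, 1) continuous-assessment scores scaled to [0, 1]
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)   # predicted final-exam score, shape (batch,)

def flag_suspicious(model: ScorePredictor, x: torch.Tensor,
                    final: torch.Tensor, z_cutoff: float = 3.0) -> torch.Tensor:
    """Boolean mask of students whose final exam is anomalously high relative to
    what their assessment history predicts (z-score of the prediction residual)."""
    model.eval()
    with torch.no_grad():
        residual = final - model(x)
    z = (residual - residual.mean()) / residual.std()
    return z > z_cutoff
```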


Subject(s)
Distance Education, Educational Measurement, Fraud, Lie Detection, Machine Learning, Algorithms, COVID-19/epidemiology, Datasets as Topic, Deception, Distance Education/methods, Distance Education/organization & administration, Educational Measurement/methods, Educational Measurement/standards, Humans, Theoretical Models, Pandemics, SARS-CoV-2, Universities
19.
Surgery; 170(6): 1652-1658, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34272045

ABSTRACT

BACKGROUND: In surgical training, assessment tools based on strong validity evidence allow for standardized evaluation despite changing external circumstances. At a large academic institution, surgical interns undergo a multimodal curriculum for central line placement that uses a 31-item binary assessment at the start of each academic year. This study evaluated this practice under increased in-person learning restrictions. We hypothesized that external constraints would affect neither resident performance nor assessment, owing to a robust curriculum and assessment checklist. METHODS: From 2018 to 2020, 81 residents completed central line training and assessment. In 2020, this curriculum was modified to conform to in-person restrictions and social distancing guidelines. Resident score reports were analyzed using multivariate analyses to compare performance, objective scoring parameters, and subjective assessments between the pre-COVID-19 years (2018 and 2019) and 2020. RESULTS: There were no significant differences in average scores or objective pass rates over the 3 years. Significant differences between 2020 and the pre-COVID-19 years occurred in subjective pass rates and in first-time success for 4 checklist items: patient positioning, draping, sterile ultrasound probe cover placement, and needle positioning before venipuncture. CONCLUSION: Modifications to procedural training within current restrictions did not adversely affect residents' overall performance. However, our data suggest that in 2020 expert trainers may not have ensured learner acquisition of automated procedural steps. Additionally, although 2020 raters could have been influenced by logistical barriers leading to more lenient grading, the assessment tool ensured training and assessment integrity.
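As a small illustration of the kind of year-over-year comparison described above, first-attempt success on a single checklist item could be compared between cohorts with a two-proportion z-test; the counts below are invented, and the study's actual analysis was multivariate:

```python
# Invented counts; compares first-attempt success on one checklist item between
# the pre-COVID-19 cohorts (2018-2019) and the 2020 cohort with a two-proportion z-test.
from math import sqrt
from statistics import NormalDist

pass_pre, n_pre = 52, 59      # hypothetical passes / residents, 2018-2019
pass_2020, n_2020 = 13, 22    # hypothetical passes / residents, 2020

p1, p2 = pass_pre / n_pre, pass_2020 / n_2020
pooled = (pass_pre + pass_2020) / (n_pre + n_2020)
se = sqrt(pooled * (1 - pooled) * (1 / n_pre + 1 / n_2020))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
print(f"{p1:.2f} vs {p2:.2f}: z = {z:.2f}, p = {p_value:.3f}")
```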


Subject(s)
Central Venous Catheterization/standards, Educational Measurement/statistics & numerical data, General Surgery/education, COVID-19, Educational Measurement/standards, General Surgery/standards, Humans